Private Law Beyond the State? Europeanization, Globalization, Privatization
Although the changing relation between private law and the state has become the subject of many debates, these debates are often unsatisfactory. Concepts like 'law', 'private law', and 'globalization' have unclear and shifting meanings, and discussions are confined to specific questions without connecting to similar discussions taking place elsewhere. In order to initiate the necessary broader approach, this article brings together the pertinent themes and aspects from various debates. It proposes a conceptual clarification of key notions in the debate (private law, state, Europeanization, globalization, and privatization) that should be of use beyond the immediate purposes of the article. It also suggests how one should analyze and categorize both the problems that these modern developments create and the solutions those problems might call for. It does not attempt to determine which solution is best. But by unveiling common structures, both within and between the various debates, the article should provide the further discussion of these solutions with a more rational framework.
Introduction: Beyond the State? Rethinking Private Law
Introduction to an issue of the journal that brings together the papers presented at a conference held at the Max Planck Institute for Comparative and International Private Law in Hamburg, Germany, in the summer of 2007, as revised by the participants.
Verification of Uncertain POMDPs Using Barrier Certificates
We consider a class of partially observable Markov decision processes
(POMDPs) with uncertain transition and/or observation probabilities. The
uncertainty takes the form of probability intervals. Such uncertain POMDPs can
be used, for example, to model autonomous agents with sensors with limited
accuracy, or agents undergoing a sudden component failure, or structural damage
[1]. Given an uncertain POMDP representation of the autonomous agent, our goal
is to propose a method for checking whether the system achieves optimal
performance while not violating a safety requirement (e.g., bounds on fuel
level or velocity). To this end, we cast the POMDP problem into a switched
system scenario. We then take advantage of this switched system
characterization and propose a method based on barrier certificates for
optimality and/or safety verification. We then show that the verification task
can be carried out computationally by sum-of-squares programming. We illustrate
the efficacy of our method by applying it to a Mars rover exploration example.
Comment: 8 pages, 4 figures
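The probability-interval uncertainty described in this abstract can be illustrated with a small data structure. The sketch below is an illustrative assumption, not taken from the paper: it represents transition probabilities as lower/upper bounds and checks that each (state, action) pair admits at least one true probability distribution within those bounds.

```python
# Minimal sketch (assumed representation, not the paper's) of an uncertain
# POMDP whose transition probabilities are given as intervals.

from dataclasses import dataclass

@dataclass
class IntervalPOMDP:
    states: list
    actions: list
    # trans[(s, a)] maps successor state -> (lower, upper) probability bounds
    trans: dict

    def is_consistent(self):
        """Check that every interval is valid and that, for each (state,
        action) pair, some point distribution lies within the intervals."""
        for (s, a), succ in self.trans.items():
            if any(lo > hi or lo < 0 or hi > 1 for lo, hi in succ.values()):
                return False
            lowers = sum(lo for lo, hi in succ.values())
            uppers = sum(hi for lo, hi in succ.values())
            # the bounds must bracket total probability 1
            if lowers > 1 or uppers < 1:
                return False
        return True

# A two-state rover model with a component whose failure rate is only
# known to lie in a range (hypothetical numbers).
m = IntervalPOMDP(
    states=["ok", "damaged"],
    actions=["move"],
    trans={("ok", "move"): {"ok": (0.8, 0.9), "damaged": (0.1, 0.2)}},
)
print(m.is_consistent())  # True
```

A barrier-certificate-based verifier would then reason over all transition functions consistent with these intervals, rather than a single fixed one.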
High-level Counterexamples for Probabilistic Automata
Providing compact and understandable counterexamples for violated system
properties is an essential task in model checking. Existing works on
counterexamples for probabilistic systems so far computed either a large set of
system runs or a subset of the system's states, both of which are of limited
use in manual debugging. Many probabilistic systems are described in a guarded
command language like the one used by the popular model checker PRISM. In this
paper we describe how to identify a smallest possible subset of the commands
that together make the system erroneous. We additionally show how the selected
commands can be further simplified to obtain a well-understandable
counterexample.
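The idea of a smallest command subset that already violates a property can be sketched as a brute-force search. This is not the paper's algorithm (which works on PRISM-style guarded commands and scales far better); `violates` here is a hypothetical oracle standing in for a call to a probabilistic model checker on the restricted model.

```python
# Hedged sketch: find a minimum-size subset of commands whose restricted
# model still violates the property, by enumerating subsets in order of
# increasing size.

from itertools import combinations

def smallest_violating_subset(commands, violates):
    """Return a minimum-size subset C of `commands` with violates(C) true,
    or None if even the full command set does not violate the property."""
    for k in range(1, len(commands) + 1):
        for subset in combinations(commands, k):
            if violates(set(subset)):
                return set(subset)
    return None

# Toy example: the property is violated as soon as commands "a" and "c"
# are both enabled (a stand-in for a real model-checking call).
cmds = ["a", "b", "c", "d"]
result = smallest_violating_subset(cmds, lambda C: {"a", "c"} <= C)
# result == {"a", "c"}, the smallest subset enabling the violation
```

Enumeration is exponential in the worst case, which is why practical approaches encode the subset-selection problem symbolically instead.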
Robustness Verification for Classifier Ensembles
We give a formal verification procedure that decides whether a classifier
ensemble is robust against arbitrary randomized attacks. Such attacks consist
of a set of deterministic attacks and a distribution over this set. The
robustness-checking problem consists of assessing, given a set of classifiers
and a labelled data set, whether there exists a randomized attack that induces
a certain expected loss against all classifiers. We show the NP-hardness of the
problem and provide an upper bound on the number of attacks that is sufficient
to form an optimal randomized attack. These results provide an effective way to
reason about the robustness of a classifier ensemble. We provide SMT and MILP
encodings to compute optimal randomized attacks or prove that there is no
attack inducing a certain expected loss. In the latter case, the classifier
ensemble is provably robust. Our prototype implementation verifies multiple
neural-network ensembles trained for image-classification tasks. The
experimental results using the MILP encoding are promising, both in terms of
scalability and the general applicability of our verification procedure.
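The underlying game can be illustrated in miniature. The sketch below is an assumed toy setup, not the paper's SMT/MILP encoding: the attacker chooses a distribution over two deterministic attacks to maximize the worst-case (minimum over classifiers) expected loss, and a one-dimensional grid search suffices for this two-attack case.

```python
# Hedged sketch of an optimal randomized attack against a two-classifier
# ensemble, with hypothetical loss values.

# loss[a][c]: loss induced on classifier c by deterministic attack a
loss = [
    [1.0, 0.0],  # attack 0 fools classifier 0 but not classifier 1
    [0.0, 1.0],  # attack 1 fools classifier 1 but not classifier 0
]

def worst_case_loss(p):
    """Expected loss of the mixture (p, 1 - p) over the two attacks,
    against the best-responding classifier."""
    return min(p * loss[0][c] + (1 - p) * loss[1][c] for c in range(2))

# Grid search over the mixture weight p given to attack 0.
best_p = max((i / 1000 for i in range(1001)), key=worst_case_loss)
print(best_p, worst_case_loss(best_p))  # 0.5 0.5
```

With complementary attacks, neither pure attack achieves positive loss against both classifiers, but the even mixture guarantees expected loss 0.5 against each; for larger attack sets this optimization is exactly what the SMT/MILP encodings solve.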